The generalization error of a machine learning model is a function that measures how well a learning machine generalizes to unseen data. It is measured as the distance between the error on the training set and the error on the test set, averaged over the entire set of training data that could be generated at each iteration of the learning process. The name reflects the fact that this function indicates the capacity of a machine trained with the specified algorithm to infer, from only a few examples, the rule (or ''generalize'' to the rule) that the teacher machine uses to generate the data.

The theoretical model assumes a probability distribution over the examples and a function giving the exact target output. The model can also include noise in the examples (in the input and/or the target output). The generalization error is usually defined as the expected value of the square of the difference between the learned function and the exact target (the mean squared error). In practical cases, the distribution and the target function are unknown, so statistical estimates are used instead.

The performance of a machine learning algorithm is assessed by plotting the generalization error over the course of the learning process; such plots are called learning curves.

The generalization error of a perceptron is the probability that a student perceptron classifies an example differently from the teacher perceptron. It is determined by the overlap of the student and teacher synaptic weight vectors and is a function of their scalar product.

== See also ==
*Bias-variance dilemma
*The problem of induction
*Generalization (logic)
*Hasty generalization
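The mean-squared-error definition above can be illustrated with a small numerical sketch. Everything here is an assumption for demonstration purposes: a hypothetical exact target `sin(x)`, a uniform input distribution, Gaussian noise on the target output, and a degree-3 polynomial as the learned function. The generalization error is estimated by Monte Carlo as the expected squared difference between the learned function and the exact target.

```python
# Illustrative sketch (not from the article): estimating the generalization
# error of a learned function as the expected squared difference from a
# hypothetical exact target f(x) = sin(x).
import numpy as np

rng = np.random.default_rng(0)

def target(x):
    # The "exact target" function assumed by the theoretical model.
    return np.sin(x)

# Training set: a few examples from an assumed input distribution,
# with noise added to the target output.
x_train = rng.uniform(-np.pi, np.pi, size=20)
y_train = target(x_train) + rng.normal(0.0, 0.1, size=20)

# Learn a function (here, a degree-3 polynomial least-squares fit).
learned = np.poly1d(np.polyfit(x_train, y_train, deg=3))

# Monte Carlo estimate of the generalization error: the mean squared
# difference between the learned function and the exact target on
# fresh examples drawn from the same distribution.
x_fresh = rng.uniform(-np.pi, np.pi, size=100_000)
gen_error = np.mean((learned(x_fresh) - target(x_fresh)) ** 2)

# Training error, for comparison with the generalization estimate.
train_error = np.mean((learned(x_train) - y_train) ** 2)
print(gen_error, train_error)
```

Because the true distribution and target are known only in this synthetic setting, the Monte Carlo average can stand in for the exact expectation; in practice one would fall back on held-out-set estimates.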
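The perceptron statement can be checked numerically. For a teacher and a student perceptron with normalized weight vectors and rotationally invariant (e.g. Gaussian) inputs, the disagreement probability is arccos(R)/&pi;, where R is the scalar product (overlap) of the two weight vectors; this closed form comes from the statistical mechanics of learning and is an assumption not spelled out in the text above. The sketch below compares that prediction with an empirical disagreement frequency.

```python
# Illustrative sketch: teacher-student perceptron generalization error.
# For normalized weight vectors T and S and Gaussian inputs, the probability
# of disagreement is eps = arccos(R) / pi, where R = T . S is the overlap.
import numpy as np

rng = np.random.default_rng(1)
n = 50  # input dimension (arbitrary choice for the demonstration)

# Draw random teacher and student weight vectors and normalize them.
teacher = rng.normal(size=n)
teacher /= np.linalg.norm(teacher)
student = rng.normal(size=n)
student /= np.linalg.norm(student)

overlap = teacher @ student               # scalar product R
eps_theory = np.arccos(overlap) / np.pi   # predicted generalization error

# Empirical estimate: the fraction of random Gaussian examples that the
# two perceptrons classify differently.
x = rng.normal(size=(200_000, n))
eps_empirical = np.mean(np.sign(x @ teacher) != np.sign(x @ student))
print(eps_theory, eps_empirical)
```

As the student's overlap R with the teacher grows toward 1 during learning, arccos(R)/&pi; falls toward 0, which is exactly the decay a learning curve for this model would trace.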